Machine learning applications in healthcare often involve time-to-event prediction tasks, including the prediction of adverse events, re-hospitalization, or death. Such outcomes are typically censored due to loss to follow-up, and standard machine learning methods cannot be applied directly to datasets with censored outcomes. In this paper, we present Auton-Survival, an open-source repository of tools to streamline working with censored time-to-event or survival data. Auton-Survival includes tools for survival regression, adjustment in the presence of domain shift, counterfactual estimation, phenotyping for risk stratification, evaluation, and estimation of treatment effects. Through a real-world case study employing a large subset of the SEER oncology incidence data, we demonstrate the ability of Auton-Survival to rapidly support data scientists in answering complex health and epidemiological questions.
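To make the censored time-to-event setting concrete, the sketch below fits a Cox proportional hazards model to right-censored data using the lifelines library; this is not Auton-Survival's own API, and the column names and toy data are purely illustrative.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Toy cohort: each row is a patient; `time` is follow-up in months,
# `event` is 1 if the adverse event was observed and 0 if censored.
df = pd.DataFrame({
    "age":   [63, 70, 55, 48, 81, 66],
    "stage": [2, 3, 1, 1, 3, 2],
    "time":  [12.0, 3.5, 24.0, 30.0, 2.0, 15.5],
    "event": [1, 1, 0, 0, 1, 0],
})

# Fit a Cox proportional hazards model; censored rows still contribute
# to the partial likelihood through their risk sets.
cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
cph.print_summary()

# Predicted survival curves for new patients (covariates only).
new_patients = df[["age", "stage"]].head(2)
print(cph.predict_survival_function(new_patients))
```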
Estimating the treatment efficacy of real-world clinical interventions involves working with continuous outcomes such as time-to-death, re-hospitalization, or a composite event that may be subject to censoring. Counterfactual reasoning in such scenarios requires decoupling the effects of confounding physiological characteristics that influence baseline survival rates from the effects of the interventions being assessed. In this paper, we present a latent variable approach to model heterogeneous treatment effects by proposing that an individual can belong to one of several latent clusters with distinct response characteristics. We show that this latent structure can mediate baseline survival rates and helps determine the effects of an intervention. We demonstrate the ability of our approach to discover actionable phenotypes of individuals based on their treatment response, using multiple large randomized clinical trials originally conducted to assess appropriate treatments for reducing cardiovascular risk.
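A compact way to write the latent-cluster idea described above: conditional on an unobserved phenotype $Z$, each cluster has its own survival curve under treatment $A$, and an individual's survival function is a mixture over clusters. The notation below is our own sketch of that structure, not the paper's exact parameterization.

```latex
% Individual survival under treatment a, marginalized over latent phenotype Z
S(t \mid X = x, A = a) \;=\; \sum_{k=1}^{K}
  \underbrace{\Pr(Z = k \mid X = x)}_{\text{phenotype membership}}\;
  \underbrace{S_k(t \mid X = x, A = a)}_{\text{cluster-specific survival}}
```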
Survival analysis is a challenging variation of regression modeling because of the presence of censoring, where the outcome measurement is only partially known, for example due to loss to follow-up. Such problems arise frequently in medical applications, making survival analysis a key endeavor in biostatistics and machine learning for healthcare, with Cox regression models being among the most commonly employed. We describe a new approach to survival analysis regression, based on learning mixtures of Cox regressions to model individual survival distributions. We propose an approximation to the Expectation-Maximization algorithm for this model that makes hard assignments to mixture groups for optimization efficiency. Within each group assignment, we fit the hazard ratios using deep neural networks and the baseline hazard of each mixture component non-parametrically. We run experiments on multiple real-world datasets and examine the mortality rates of patients across ethnicity and gender. We emphasize the importance of calibration in healthcare settings and demonstrate that our approach outperforms classical and modern survival analysis baselines in both discriminative performance and calibration, with large gains on minority demographics.
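One way to write the mixture-of-Cox structure sketched above: each mixture component $k$ has its own non-parametric baseline hazard and a neural-network log-hazard-ratio, and an individual's hazard follows the component they are (hard-)assigned to. The formula is a reading of the abstract, not the paper's exact notation.

```latex
% Hazard and survival for an individual with covariates x in mixture component k
\lambda_k(t \mid x) \;=\; \lambda_{0,k}(t)\, \exp\!\big(f_{\theta_k}(x)\big),
\qquad
S_k(t \mid x) \;=\; \exp\!\Big(-\!\int_0^t \lambda_k(u \mid x)\, du\Big)
```

Here $\lambda_{0,k}$ is the component's non-parametric baseline hazard and $f_{\theta_k}$ is a deep network; the hard-EM step alternates between assigning each individual to the component that best explains their observed outcome and refitting $(\lambda_{0,k}, \theta_k)$ within each component.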
The previous fine-grained datasets mainly focus on classification and are often captured in a controlled setup, with the camera focusing on the objects. We introduce the first Fine-Grained Vehicle Detection (FGVD) dataset in the wild, captured from a moving camera mounted on a car. It contains 5502 scene images with 210 unique fine-grained labels of multiple vehicle types organized in a three-level hierarchy. While previous classification datasets also include makes of different kinds of cars, the FGVD dataset introduces new class labels for categorizing two-wheelers, autorickshaws, and trucks. The FGVD dataset is challenging, as it has vehicles in complex traffic scenarios with intra-class and inter-class variations in type, scale, pose, occlusion, and lighting conditions. Current object detectors such as YOLOv5 and Faster R-CNN perform poorly on our dataset due to a lack of hierarchical modeling. Along with providing baseline results for existing object detectors on the FGVD dataset, we also present the results of combining an existing detector with the recent Hierarchical Residual Network (HRN) classifier for the FGVD task. Finally, we show that FGVD vehicle images are the most challenging to classify among the fine-grained datasets.
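The detect-then-classify combination mentioned above can be sketched as a two-stage pipeline: run an off-the-shelf detector, then classify each detected crop with a fine-grained model. In the sketch below a stock ResNet stands in for the HRN classifier, and the score threshold and input handling are illustrative only, not the paper's setup.

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor

# Off-the-shelf detector; the fine-grained classifier is a placeholder for HRN,
# whose actual architecture and weights are not part of this abstract.
detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
detector.eval()
fine_grained_classifier = torchvision.models.resnet50(weights="DEFAULT")  # placeholder
fine_grained_classifier.eval()

def detect_then_classify(image, score_threshold=0.5):
    """Two-stage pipeline on a PIL image: detect vehicles, then classify each crop."""
    with torch.no_grad():
        detections = detector([to_tensor(image)])[0]
        results = []
        for box, score in zip(detections["boxes"], detections["scores"]):
            if score < score_threshold:
                continue
            x1, y1, x2, y2 = [int(v) for v in box]
            crop = to_tensor(image.crop((x1, y1, x2, y2))).unsqueeze(0)
            crop = torch.nn.functional.interpolate(crop, size=(224, 224))
            label = fine_grained_classifier(crop).argmax(dim=1).item()
            results.append((box.tolist(), label))
    return results
```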
With the increasing use of Graph Neural Networks (GNNs) in critical real-world applications, several post hoc explanation methods have been proposed to understand their predictions. However, there has been no work on generating explanations on the fly during model training and utilizing them to improve the expressive power of the underlying GNN models. In this work, we introduce a novel explanation-directed neural message passing framework for GNNs, EXPASS (EXplainable message PASSing), which aggregates only embeddings from nodes and edges identified as important by a GNN explanation method. EXPASS can be used with any existing GNN architecture and subgraph-optimizing explainer to learn accurate graph embeddings. We theoretically show that EXPASS alleviates the oversmoothing problem in GNNs by slowing the layer-wise loss of Dirichlet energy, and that the embedding difference between the vanilla message passing and EXPASS frameworks can be upper-bounded by the difference of their respective model weights. Our empirical results show that graph embeddings learned using EXPASS improve predictive performance and alleviate the oversmoothing problem of GNNs, opening up new frontiers in graph machine learning to develop explanation-based training frameworks.
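The core idea, as we read it, is to weight or drop incoming messages according to an importance mask produced by an explainer before aggregation. The sketch below is a toy dense-adjacency version in PyTorch, not the EXPASS implementation; the explainer is abstracted away as a given edge-importance matrix.

```python
import torch
import torch.nn as nn

class ExplanationMaskedMessagePassing(nn.Module):
    """One GNN layer that aggregates only messages deemed important.

    `edge_importance` is an (N, N) matrix in [0, 1] produced by some post hoc
    explainer (e.g. a subgraph-optimizing method); entries below `threshold`
    are zeroed out so unimportant neighbors do not contribute.
    """

    def __init__(self, in_dim, out_dim, threshold=0.5):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)
        self.threshold = threshold

    def forward(self, x, adj, edge_importance):
        # Keep only edges that exist AND are flagged as important.
        mask = adj * (edge_importance >= self.threshold).float()
        # Row-normalize so each node averages over its retained neighbors.
        deg = mask.sum(dim=1, keepdim=True).clamp(min=1.0)
        aggregated = (mask / deg) @ x
        return torch.relu(self.linear(aggregated))

# Toy usage: 4 nodes, 8 features, a random graph and random importance scores.
x = torch.randn(4, 8)
adj = (torch.rand(4, 4) > 0.5).float()
importance = torch.rand(4, 4)
layer = ExplanationMaskedMessagePassing(8, 16)
out = layer(x, adj, importance)   # shape (4, 16)
```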
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
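For readers who want to try the released checkpoints, the sketch below loads a small BLOOM variant through the Hugging Face transformers library and generates a continuation; the checkpoint name (bigscience/bloom-560m) and the generation settings are our choice of example, not prescribed by the paper.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# A small BLOOM checkpoint keeps the example runnable on a single GPU/CPU;
# the full 176B model requires multi-GPU or quantized inference setups.
model_name = "bigscience/bloom-560m"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "Translate to French: The weather is nice today.\nFrench:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```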
A classical result in learning theory shows the equivalence of PAC learnability of binary hypothesis classes and the finiteness of VC dimension. Extending this to the multiclass setting was an open problem, which was settled in a recent breakthrough result characterizing multiclass PAC learnability via the DS dimension introduced earlier by Daniely and Shalev-Shwartz. In this work we consider list PAC learning where the goal is to output a list of $k$ predictions. List learning algorithms have been developed in several settings before and indeed, list learning played an important role in the recent characterization of multiclass learnability. In this work we ask: when is it possible to $k$-list learn a hypothesis class? We completely characterize $k$-list learnability in terms of a generalization of DS dimension that we call the $k$-DS dimension. Generalizing the recent characterization of multiclass learnability, we show that a hypothesis class is $k$-list learnable if and only if the $k$-DS dimension is finite.
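The characterization in the last sentence can be stated compactly; the symbol $d^{\,k}_{\mathrm{DS}}$ below is our shorthand for the $k$-DS dimension, and the statement is a paraphrase of the abstract rather than the paper's formal theorem.

```latex
% For a hypothesis class H \subseteq Y^X:
\mathcal{H} \text{ is } k\text{-list PAC learnable}
\;\iff\;
d^{\,k}_{\mathrm{DS}}(\mathcal{H}) < \infty
```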
As post hoc explanations are increasingly used to understand the behavior of graph neural networks (GNNs), it becomes crucial to evaluate the quality and reliability of GNN explanations. However, assessing the quality of GNN explanations is challenging, as existing graph datasets have no or unreliable ground-truth explanations for a given task. Here, we introduce ShapeGGen, a synthetic graph data generator that can produce a variety of benchmark datasets (e.g., varying graph sizes, degree distributions, homophilic vs. heterophilic graphs) accompanied by ground-truth explanations. Furthermore, the flexibility to generate diverse synthetic datasets and corresponding ground-truth explanations allows us to mimic the data generated by various real-world applications. We include ShapeGGen and several real-world graph datasets in GraphXAI, an open-source graph explainability library. In addition to synthetic and real-world graph datasets with ground-truth explanations, GraphXAI provides data loaders, data processing functions, visualizers, GNN model implementations, and evaluation metrics to benchmark the performance of GNN explainability methods.
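To illustrate the idea of a synthetic benchmark with ground-truth explanations (this is not GraphXAI's actual API), the toy generator below plants "house" motifs on a random base graph and records which nodes belong to a motif; that membership mask is exactly the kind of ground truth an explainer can be scored against.

```python
import networkx as nx
import random

def generate_toy_explanation_benchmark(n_base=30, n_motifs=5, seed=0):
    """Barabasi-Albert base graph + planted house motifs with a ground-truth mask."""
    random.seed(seed)
    g = nx.barabasi_albert_graph(n_base, m=2, seed=seed)
    ground_truth = {node: 0 for node in g.nodes}   # 1 = part of an explanatory motif

    for _ in range(n_motifs):
        # A "house" motif: a 4-cycle with a roof node.
        start = g.number_of_nodes()
        house = list(range(start, start + 5))
        g.add_edges_from([
            (house[0], house[1]), (house[1], house[2]),
            (house[2], house[3]), (house[3], house[0]),   # square
            (house[3], house[4]), (house[0], house[4]),   # roof
        ])
        # Attach the motif to a random node of the base graph.
        g.add_edge(random.choice(range(n_base)), house[0])
        for node in house:
            ground_truth[node] = 1

    return g, ground_truth

graph, mask = generate_toy_explanation_benchmark()
print(graph.number_of_nodes(), "nodes,", sum(mask.values()), "motif nodes")
```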
Conversational recommender systems (CRS), which work as conversation-based, recommendation-oriented tools to provide items of interest and explore user preferences, have been receiving growing attention. However, existing work on CRS fails to explicitly show the reasoning logic to users, and the whole CRS remains a black box. We therefore propose a novel end-to-end framework, Explanation Generation for Conversational Recommendation (EGCR), based on generating explanations for conversational agents to explain why they take a given action. EGCR incorporates user reviews to enhance item representations and increase the informativeness of the whole conversation. To the best of our knowledge, this is the first framework for explainable conversational recommendation on real-world datasets. Moreover, we evaluate EGCR on a benchmark conversational recommendation dataset and achieve better performance in both recommendation accuracy and conversation quality than other state-of-the-art models. Finally, extensive experiments demonstrate that the generated explanations are not only of high quality and explainability but also make the CRS more trustworthy. We will make our code available to contribute to the CRS community.
As recommender systems become increasingly sophisticated and complex, they often suffer from a lack of fairness and transparency. Providing robust and unbiased explanations for recommendations has drawn growing attention, as it can help address these issues and improve the trustworthiness and informativeness of recommender systems. However, despite the fact that such explanations are generated for humans, who respond more strongly to messages carrying appropriate emotions, there has been little consideration of emotion when generating explanations for recommendations. Current explanation generation models are found to exaggerate certain emotions without accurately capturing the underlying tone or meaning. In this paper, we propose a novel multi-head transformer-based method, the Emotion-aware Transformer for Explainable Recommendation (EmoTER), to generate more robust, fair, and emotion-enhanced explanations. To measure the linguistic quality and emotional fairness of the generated explanations, we adopt both automatic text metrics and human perception for evaluation. Experiments on three widely used benchmark datasets with multiple evaluation metrics show that EmoTER consistently outperforms existing state-of-the-art explanation generation models in text quality, explainability, and fairness with respect to emotion distribution. Our implementation of EmoTER will be released as an open-source toolkit to support further research.